Scalable Deep Generative Relational Model with High-Order Node Dependence
In this work, we propose a probabilistic framework for modelling relational data and exploring its latent structure. Given feature information for the nodes in a network, our model builds a deep architecture that can approximate potential nonlinear mappings between the nodes' features and their latent representations. For each node, we incorporate high-order structure information from all of its neighbourhoods to generate its latent representation, so that these latent representations vary smoothly over the network. Since the latent representations are generated from Dirichlet distributions, we further develop a data augmentation trick that enables efficient Gibbs sampling for the Ber-Poisson likelihood with Dirichlet random variables. Our model is readily applicable to large sparse networks, as its computational cost scales with the number of positive links in the network. The superior performance of our model is demonstrated through improved link prediction on a range of real-world datasets.
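To make the neighbourhood-smoothing idea concrete, here is a minimal sketch (not the paper's exact generative process) of one propagation layer: each node's new Dirichlet concentration mixes its own previous representation with its neighbours', and a fresh representation is drawn from that Dirichlet. The scale `beta` and the uniform self-loop weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_layer(pi_prev, adj, beta=10.0):
    """One hedged sketch of a propagation step: mix each node's previous
    representation with its neighbours', then draw a new representation
    from the resulting Dirichlet so rows stay on the simplex."""
    n, k = pi_prev.shape
    mix = adj + np.eye(n)                  # self-loop: keep part of own representation
    conc = beta * (mix @ pi_prev)          # neighbour-smoothed concentrations
    return np.vstack([rng.dirichlet(conc[i] + 1e-6) for i in range(n)])

# toy 4-node chain network, K = 3 latent communities
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pi = rng.dirichlet(np.ones(3), size=4)     # layer-0 representations
pi = propagate_layer(pi, adj)              # one deeper layer
```

Because every new representation is drawn around a neighbour average, adjacent nodes end up with similar latent representations, which is the "smoothness" the abstract refers to.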
Xuhui Fan, Bin Li, Caoyuan Li, Scott Sisson, Ling Chen
We propose a probabilistic framework for modelling and exploring the latent structure of relational data. Given feature information for the nodes in a network, the scalable deep generative relational model (SDREM) builds a deep network architecture that can approximate potential nonlinear mappings between nodes' feature information and the nodes' latent representations. Our contribution is two-fold: (1) We incorporate high-order neighbourhood structure information to generate the latent representations at each node, which vary smoothly over the network. (2) Since the latent representations are Dirichlet distributed, we develop a data augmentation trick that permits efficient Gibbs sampling for the Ber-Poisson link likelihood.
Reviews: Scalable Deep Generative Relational Model with High-Order Node Dependence
First of all, I am not convinced that representing each node with a Dirichlet distribution, which yields a positive node embedding, is a good choice. It is quite different from traditional real-valued embedding methods, and I suspect that positive embedding representations lose semantic information compared to real-valued ones. If there are other positive embedding methods, please reference them to clarify their relation to the proposed method. As mentioned in the article, the proposed SDREM propagates information through neighbours in a similar spirit to the spatial graph convolutional network (GCN) in a frequentist setting. However, as far as I can tell, GCNs that have already been applied not only consider neighbouring information in the graph, but also propagate each node embedding to a deeper representation through a fully connected network.
The paper was reviewed by three experts in the field. The reviewers and AC all agree that the paper contains novel contributions, but share the opinion that it could be strengthened by addressing the reviewers' comments. In addition to those comments, such as the need to add a comparison with VGAE and its variants, the AC would like to provide some additional feedback to the authors: the AC views the paper as a smart combination of the edge partition model, the gamma belief net, and the Dirichlet belief net, enhanced by adding covariate dependence and by incorporating the network information in learning the connection weights of the Dirichlet belief net. Pros: 1) the combination is non-trivial: replacing the gamma weights in the edge partition model with latent counts is the key to allowing closed-form Gibbs sampling (upward latent count propagation followed by downward variable sampling). How X is used in (3) and sampled in (5) is novel.
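The Ber-Poisson link likelihood mentioned by the AC (in the edge-partition style) can be sketched as follows: a latent count for each pair is Poisson with rate given by the weighted inner product of the two nodes' latent representations, and a link is observed iff that count is positive, giving P(link) = 1 - exp(-rate). Since zero-count pairs contribute nothing to the rate's sufficient statistics, sampling only needs latent counts at positive links, which is the source of the cost scaling noted in the abstract. Variable names here are illustrative.

```python
import numpy as np

def link_prob(pi, lam):
    """Ber-Poisson link likelihood sketch: with latent count
    m_ij ~ Poisson(sum_k lam[k] * pi[i, k] * pi[j, k]) and an
    observed link iff m_ij > 0, P(link) = 1 - exp(-rate)."""
    rate = (pi * lam) @ pi.T        # pairwise Poisson rates
    return 1.0 - np.exp(-rate)

# toy example: 3 nodes, K = 2 communities with weights lam
pi = np.full((3, 2), 0.5)           # placeholder simplex representations
lam = np.array([1.0, 2.0])          # placeholder community weights
p = link_prob(pi, lam)
```

Note the resulting probability matrix is symmetric, as expected for an undirected network model.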